
    Expect AI/MLOps Complexity

    An article from CHIPS, the Department of the Navy information technology magazine.

    New Generation of Instrumented Ranges: Enabling Automated Performance Analysis

    Military training conducted on physical ranges that match a unit’s future operational environment provides an invaluable experience. Today, to conduct a training exercise while ensuring a unit’s performance is closely observed, evaluated, and reported on in an After Action Review, the unit requires a number of instructors to accompany its different elements. Training organized on ranges for urban warfighting brings an additional level of complexity: the high level of occlusion typical of these environments multiplies the number of evaluators needed. While units have great need for such training opportunities, they may not have the human resources necessary to conduct them successfully. In this paper we report on our US Navy/ONR-sponsored project aimed at a new generation of instrumented ranges and on the early results we have achieved. We suggest a radically different concept: instead of recording multiple video streams that must be reviewed and evaluated by a number of instructors, our system will focus on capturing dynamic individual warfighter pose data and performing automated performance evaluation. We will use an in situ network of automatically controlled pan-tilt-zoom video cameras together with personal position and orientation sensing devices. Our system will record video, reconstruct dynamic 3D individual poses, analyze them, recognize events, evaluate performance, generate reports, provide real-time free exploration of the recorded data, and even allow the user to generate ‘what-if’ scenarios that were never recorded. The most direct benefit for an individual unit will be the ability to conduct training with fewer human resources while obtaining a more quantitative account of its performance (dispersion across the terrain, ‘weapon flagging’ incidents, number of patrols conducted). Instructors will have immediate feedback on some elements of the unit’s performance. Having data sets for multiple units will enable historical trend analysis, providing new insights and benefits for the entire service.
    Office of Naval Research
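    One of the quantitative measures named above, dispersion across the terrain, can be computed directly from recorded position data. The sketch below is a minimal illustration, not the project's implementation; the (N, 2) position-array layout is an assumption about how such a sensing network might report ground coordinates.

```python
import numpy as np

def terrain_dispersion(positions: np.ndarray) -> float:
    """RMS distance of unit members from their centroid, in meters.

    positions: hypothetical (N, 2) array of (x, y) ground coordinates
    at one time step, one row per warfighter.
    """
    centroid = positions.mean(axis=0)
    return float(np.sqrt(((positions - centroid) ** 2).sum(axis=1).mean()))

# Example: four team members at one moment of a recorded exercise.
sample = np.array([[0.0, 0.0], [5.0, 1.0], [3.0, 6.0], [8.0, 4.0]])
print(f"dispersion: {terrain_dispersion(sample):.2f} m")
```

    Tracked over the course of an exercise, a time series of such values is exactly the kind of per-unit record that would feed the historical trend analysis the abstract envisions.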

    Collected Guidelines for Master's Theses

    Thesis Writing Document
    This page attempts to explain how to write a Master's Thesis at NPS: the process, timelines, content, style, and format. I compiled it with help from Susan Sanchez, Mike McCauley, Cynthia Irvine, John Powers, Amela Sadagic, Neil Rowe, Kevin Squire, Daphne Kapolka, and Sam Buttrey. Thank you! Mathias. First and foremost, check the official NPS thesis requirements, currently at http://www.nps.edu/research/research1.html. That web page also lists many useful resources, including workshops, document templates, and guidelines. The following links and tips are grouped by their main content, but most documents cover more than one topic.
    Approved for public release; distribution is unlimited.

    Video Requirements for Web-based Virtual Environments Using Extensible 3D (X3D) Graphics

    Submitted to the W3C Video on the Web Workshop
    Real-time interactive 3D graphics and virtual environments typically include a variety of multimedia capabilities, including video. Extensible 3D (X3D) Graphics is an ISO standard produced by the Web3D Consortium that defines 3D scenes using a scene-graph approach. Multiple X3D file formats and language encodings are available, with a primary emphasis on XML for maximum interoperability with the Web architecture. A large number of functional capabilities are needed and projected for the use of video together with Web-based virtual environments. This paper examines numerous functional requirements for the integrated use of Web-compatible video with 3D. Three areas of interest are identified: video usage within X3D scenes, linking to video external to X3D scenes, and generation of 3D geometry from video.
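    The first of those three areas, video usage within an X3D scene, is served by the standard MovieTexture node. Below is a minimal XML-encoded scene, emitted from Python for self-containment, that maps a video file onto a box face; the file name is a placeholder, and the scene is an illustrative sketch rather than an example from the paper.

```python
# Minimal X3D scene (XML encoding) mapping a video onto a box face via
# the standard MovieTexture node. "briefing.mp4" is a placeholder.
X3D_SCENE = """<?xml version="1.0" encoding="UTF-8"?>
<X3D profile="Immersive" version="3.2">
  <Scene>
    <Shape>
      <Appearance>
        <MovieTexture url='"briefing.mp4"' loop="true"/>
      </Appearance>
      <Box size="4 3 0.1"/>
    </Shape>
  </Scene>
</X3D>"""

with open("video_panel.x3d", "w") as f:
    f.write(X3D_SCENE)
```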

    Method and apparatus for computer vision analysis of cannon-launched artillery video

    Patent
    An automated method to quantify the pitching and yawing motion of a projectile during ballistic flight using two camera/tracker video systems. Image processing tools are used to segment the shape of the projectile in each frame of a launch video, which allows the location and observed pitch angle to be calculated with sub-pixel accuracy. Subsequent automated analysis uses the history of the projectile location and the pitching behavior to calculate estimates for the epicyclic motion, as well as other ballistic parameters such as aeroballistic coefficients. Using two cameras located at different orthographic views of the line-of-fire (LOF) allows the pitching and yawing motion history of the projectile to be calculated in three dimensions (3D). In addition, input of the camera locations, cannon trunnion location, and cannon pointing direction allows for automatic correction of camera misalignment.
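    As a rough illustration of the per-frame segmentation step, the sketch below thresholds one grayscale frame, takes the largest blob as the projectile, and fits an ellipse to obtain a sub-pixel location and observed pitch angle. It is a simplification under assumed conditions (high-contrast projectile, clean sky background), not the patented method.

```python
import cv2
import numpy as np

def projectile_pose(frame: np.ndarray):
    """Return (center_x, center_y, observed_pitch_deg) for one
    grayscale launch-video frame.

    Simplified stand-in for the patent's segmentation: Otsu threshold,
    largest contour, least-squares ellipse fit. The ellipse fit yields
    sub-pixel estimates of the projectile's center and orientation.
    Assumes the projectile is the brightest object in the frame.
    """
    _, mask = cv2.threshold(frame, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    body = max(contours, key=cv2.contourArea)   # projectile blob
    (cx, cy), _, angle = cv2.fitEllipse(body)   # sub-pixel fit
    return cx, cy, angle
```

    Run over every frame of a launch video, the resulting (cx, cy, angle) history is the raw input to the epicyclic-motion and aeroballistic-coefficient fits the abstract describes; combining the two orthographic camera views into 3D pitch and yaw is a separate triangulation step.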

    IPARTS Overview: Improved Performance and Readiness Training System

    MOVES Research & Education Systems Seminar: Presentation; Session 3c: Human Systems and Training (Operational Systems); Moderator: Mike McCauley; Introduction to and a Progress Report on Transitioning IPARTS; speakers: Mathias Kolsch & Mike McCaule

    Developing a Low-Cost, Portable Virtual Environment for Aircraft Carrier Launch Officers

    The article of record as published may be found at http://dx.doi.org/10.1177/1541931215591398
    Proceedings of the Human Factors and Ergonomics Society 59th Annual Meeting, 2015
    The primary purpose of a United States aircraft carrier is to transport its embarked air wing in order to project combat power through the launch and recovery of various aircraft. To get airborne, the air wing depends upon the skills of a small number of officers responsible for the safe and rapid launch of aircraft from the carrier deck. These officers, known as “shooters”, receive initial classroom training on the systems they use and then qualify as launch officers through on-the-job training. Due to scheduling complexities, the training to achieve qualification is disjointed and often requires trainees to go underway with different aircraft carriers to complete their training. The current approach burdens the parent command, host commands, and the trainees. Of greater concern is the lack of consistency in the training of such a high-risk activity. This paper describes the results of a job task analysis conducted to provide insight into the skills required to perform the duties of a launch officer. Further, the information from the job task analysis was examined and a representative finite state machine was developed and is presented. Finally, a portable, low-cost virtual environment built on the work described above is discussed. It is proposed that the current virtual reality system used for this demonstration faithfully recreates the required attributes and scenarios to train launch officer tasks, and that the prototype system, with proof of training transfer, can reduce the burden on commands and trainees and, perhaps most importantly, provide consistent training.
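    The paper's actual finite state machine is not reproduced here, but the dictionary-based sketch below shows the general shape such a task model can take. All state and event names are illustrative guesses, not the published FSM.

```python
# Toy transition table in the spirit of a catapult-launch-sequence FSM.
# States and events are illustrative, not taken from the paper.
TRANSITIONS = {
    ("AIRCRAFT_IN_SHUTTLE", "take_tension"):    "UNDER_TENSION",
    ("UNDER_TENSION",       "run_up_engines"):  "FULL_POWER",
    ("FULL_POWER",          "final_checks_ok"): "READY_TO_FIRE",
    ("FULL_POWER",          "suspend"):         "SUSPENDED",
    ("SUSPENDED",           "resume"):          "UNDER_TENSION",
    ("READY_TO_FIRE",       "fire"):            "LAUNCHED",
}

def step(state: str, event: str) -> str:
    """Advance the FSM; an illegal event is a trainee error to flag."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal event {event!r} in state {state!r}")

state = "AIRCRAFT_IN_SHUTTLE"
for evt in ("take_tension", "run_up_engines", "final_checks_ok", "fire"):
    state = step(state, evt)
print(state)  # LAUNCHED
```

    Encoding the task this way lets a virtual environment both script scenarios and score a trainee's actions against the set of legal transitions at each step.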

    Method and apparatus for computer vision analysis of spin rate of marked projectiles

    Patent
    Methods and systems for determining the spin rate of a projectile are provided. A receiving module receives a video of the launch of a projectile, the projectile having at least one marking disposed thereon. A segmenting module segments a plurality of shapes of the projectile from the video. The segmenting module includes a sensitivity module configured to increase the sensitivity of the video to find the projectile in the video. An extraction module extracts at least one column of pixels of the at least one marking from the shapes. A generating module generates a stripe pixel column history image from the at least one column of pixels. A calculating module calculates the spin rate of the projectile from the stripe pixel column history image.
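    The final step, recovering spin rate from the stripe pixel column history image, lends itself to a frequency-domain estimate. The sketch below assumes a single stripe marking and treats the mean intensity of each extracted column as a signal that is periodic at the spin frequency; it is an illustration of the idea, not the patented calculation.

```python
import numpy as np

def spin_rate_hz(column_history: np.ndarray, fps: float) -> float:
    """Estimate spin rate from a stripe pixel column history image.

    column_history: (num_frames, column_height) array whose row i is
    the pixel column extracted over the marking in frame i. With a
    single stripe, each column's mean intensity oscillates once per
    revolution, so the dominant FFT peak gives the spin rate.
    """
    signal = column_history.mean(axis=1)   # one intensity value per frame
    signal = signal - signal.mean()        # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return float(freqs[spectrum.argmax()])
```

    Note that the camera frame rate bounds the highest spin rate measurable this way (the Nyquist limit), which is one reason such analyses rely on high-speed launch video.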